COMPUTER-IMPLEMENTED METHOD AND SYSTEM
Patent abstract:
A computer-implemented method, system, and computer-readable storage media are described. The techniques allow efficient swapping of pages of memory (204) to and from a working set (102, 322) of pages for a process through large reads and writes of pages to and from sequentially ordered locations on secondary storage. When writing pages from a working set (102, 322) of a process to secondary storage, the pages can be written to reserved, contiguous locations in a dedicated swap file (104, 320) according to virtual address order or another order. Such writing to sequentially ordered locations allows groups of pages to be read back in large, sequential blocks, providing more efficient read operations to return pages to physical memory (204).

Publication number: BR112014014274B1
Application number: R112014014274-2
Filing date: 2012-12-14
Publication date: 2021-09-14
Inventors: Mehmet Iyigun; Yevgeniy Bak; Landy Wang; Arun U. Kishan
Applicant: Microsoft Technology Licensing, LLC
Main IPC class:
Patent description:
Background

[001] Computer systems use main memory (often referred to as physical memory) to run processes, including programs or software applications. In modern systems, this main memory usually consists of volatile memory such as random access memory (RAM). The operating system (OS) can assign each process a number of memory pages to use while the process runs in physical memory. However, active processes may need more physical memory than is available on the system. In such cases, virtual memory can be used to supplement the physical memory used by active processes, rather than keeping all process pages in physical memory.

[002] Virtual memory can be implemented by writing one or more pages of a process to non-volatile memory on secondary storage (e.g., a hard disk) and reading the pages back into physical memory as needed. For example, when data is not being actively used by a process, pages containing that data can be written to secondary storage, thus freeing up space in physical memory. This process of reading and writing pages between physical memory and secondary storage is often referred to as paging, and the secondary storage space used for writing pages is often referred to as a paging file. The speed and efficiency with which this paging takes place for a process can affect system performance and the user experience.

Summary

[003] Traditional paging operations tend to page individual pages from physical memory to secondary storage based on the memory requirements of currently active processes, and this can lead to fragmentation on secondary storage. This patent application describes techniques for efficiently swapping one or more pages to and from a working set of pages for a process through large reads and writes of pages to and from sequentially ordered locations on secondary storage. Because reads and writes tend to be more efficient when they can be ordered sequentially and/or performed in larger blocks, the techniques described here use large reads and writes of pages to and from sequentially ordered locations on secondary storage during swap operations. When swapping pages from physical memory to secondary storage, the pages can be written to reserved, contiguous locations in a dedicated swap file according to virtual address order. Such writing can make it possible to swap pages back in as large, sequentially ordered blocks of memory, providing more efficient swap-in.

[004] This summary is provided to introduce a selection of concepts in a simplified form that are described in more detail below in the Detailed Description. This summary is not intended to identify key features or essential characteristics of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter.

Brief Description of the Drawings

[005] The detailed description is described with reference to the accompanying figures. In the figures, the leftmost digit(s) of a reference number identify the figure in which the reference number first appears. The same reference numbers in different figures indicate similar or identical elements.

[006] Figures 1A and 1B illustrate an example of writing pages from a working set of a process to sequentially ordered locations in a swap file, according to embodiments;

[007] Figure 2 is a schematic diagram representing an example computing system, according to embodiments;
[008] Figure 3 is a schematic diagram of components of an example operating system, according to embodiments;

[009] Figure 4A shows a flow diagram of an illustrative process for swapping out pages of a working set, according to embodiments;

[0010] Figure 4B shows a flow diagram of an illustrative process for writing swapped-out pages to a swap file, according to embodiments;

[0011] Figure 5 shows a flow diagram of an illustrative process for swapping pages from a swap file back into a working set, according to embodiments;

[0012] Figure 6 shows a flow diagram of an illustrative process for dynamically managing a swap file, according to embodiments.

Detailed Description

Overview

[0013] The embodiments described here allow more efficient swapping of memory pages of a process's working set through reads and writes of large blocks of memory to and from sequentially ordered locations on secondary storage. Processes running in physical memory, such as random access memory (RAM), may require more physical memory than is available on the system. In such cases, a traditionally configured memory manager or other operating system component can perform paging operations to free some physical memory by writing one or more memory pages of a process to a paging file on secondary storage. In traditional paging, individual pages of a process can be written out (i.e., paged out) to free physical memory as it is needed, and pages can be paged back into physical memory when desired, such as when a process seeks to access them (for example, when a page fault occurs). Such traditional paging of individual pages as needed is often referred to as demand paging, and it can lead to input/output (I/O) operations to secondary storage that are random and small, with multiple pages stored in non-contiguous storage space and without any particular order.

[0014] In addition, I/O operations (for example, reads and writes) in computer systems are generally more efficient when performed sequentially and in large requests. For example, on systems using solid-state disks, I/O operations that are ordered sequentially can be more efficient by a factor of two to three compared with requests to random locations. In many cases, larger sequential requests can yield similar gains relative to smaller sequential requests. Furthermore, on systems using rotating disks, the efficiency gain can be as large as fifty times. With this in mind, the embodiments described here allow for efficient swapping through the use of I/O operations that read and write larger sets of pages to and from sequentially ordered locations in a swap file.

[0015] As used herein, the term page can refer to a block of memory used by a process during its execution. When the process is active, a page can be in physical memory, where it is accessible to the process. A memory manager or other component of an operating system (OS) can remove one or more pages from physical memory and write them to secondary storage. Pages can be read back into physical memory by copying them from secondary storage.

[0016] In one or more embodiments, pages can include the private pages of a process. As used here, private pages can refer to pages that are owned by or dedicated to a particular process and used by no other process, such as a stack allocated to the process. Other types of pages include shareable pages that are used by multiple processes, such as a file mapping. Some embodiments support efficient swapping of private pages. In addition, some embodiments may also support swapping pagefile-backed shared pages. These types of pages can be stored in a single location (for example, in a paging file) and can be referenced by an identifier or other indication. A process can pass an identifier for a shared page in a paging file to another process, and the page can persist in memory while any process holds an identifier to it.
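The distinction between private and shareable pages, and the notion of a working set as the collection of pages attributed to one process, can be pictured with a small data structure. The sketch below is a user-space illustration in C; the `page_t` and `working_set_t` types and their field names are assumptions made for illustration, not the structures of any particular operating system.

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

#define PAGE_SIZE 4096u

typedef enum { PAGE_PRIVATE, PAGE_SHAREABLE } page_kind_t;

/* One page of a process's working set (illustrative). */
typedef struct {
    uintptr_t   virtual_address;  /* where the process sees the page             */
    page_kind_t kind;             /* private to the process or shareable         */
    bool        dirty;            /* modified since last written to backing file */
    bool        locked;           /* pinned in physical memory, not swappable    */
    bool        resident;         /* currently backed by physical memory         */
} page_t;

/* A working set is simply the pages currently attributed to one process. */
typedef struct {
    int     process_id;
    page_t *pages;
    size_t  page_count;
} working_set_t;
```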
[0017] As used herein, the term working set can refer to a set of pages for a process. Embodiments support various types of working sets. For example, a working set might include pages for data that is accessed and/or referenced by a process during a certain time span. Such a working set can provide an approximation of the set of pages likely to be accessed by the process in the near future (for example, over an upcoming period of time), such that it may be desirable to keep those pages in physical memory for ready access by the process. However, embodiments are not so limited and can support other types of page sets as a working set. For example, a working set for a process might be those pages that are accessible directly from a processor without going through the OS.

[0018] As used here, the term swap file can refer to space reserved on secondary storage (e.g., a hard disk) and used for swapping pages into or out of physical memory. In some embodiments, the swap file is a dedicated swap file that is separate from the paging file used for traditional paging operations. In some embodiments, the swap file can be part of the paging file. In some embodiments, the swap file can be initialized by a memory manager or other operating system component, and its size can be dynamically managed, as described here with respect to Figure 6.

[0019] Embodiments provide for swapping pages out of a working set of a process, and for writing one or more swapped-out pages from the working set to the swap file on secondary storage. As discussed above, embodiments allow for efficient swapping by reading and/or writing pages in large groups to and from sequentially ordered locations in a swap file. Figures 1A and 1B illustrate an example of swap-out and writing for one or more embodiments.

[0020] Figure 1A shows a working set 102 of a given process, Process X, and a swap file 104 on secondary storage. A determination can be made to swap one or more pages of working set 102 out of physical memory. An OS component such as a memory manager can then identify one or more pages of working set 102 that are candidate pages for swapping, such as the working set's private pages. The total size of the identified candidate pages can be calculated. Then, in one operation 108, a placeholder 106 sufficient to store the candidate pages can be reserved in the swap file 104. In addition, a location for each candidate page can be reserved within placeholder 106, the locations ordered sequentially according to the virtual address order of the candidate pages. At this point, in some embodiments, the candidate pages can be said to have been swapped out, even though no candidate pages have yet been written to the swap file.
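The reservation step of Figure 1A can be approximated in user space as follows: sort the candidate pages by virtual address, size a single contiguous placeholder to their total size, and hand each page a sequentially ordered slot within it. This is only a sketch; `swapfile_reserve_region` and the offset bookkeeping are hypothetical stand-ins, not the memory manager's actual interfaces.

```c
#include <stdint.h>
#include <stdlib.h>

#define PAGE_SIZE 4096u

typedef struct {
    uintptr_t virtual_address;   /* page address in the process's working set          */
    int64_t   swap_offset;       /* reserved byte offset in the swap file (-1 = none)   */
} candidate_t;

/* Assumed helper: reserves `bytes` of contiguous space in the swap file and
 * returns the starting offset of the placeholder (illustrative only). */
extern int64_t swapfile_reserve_region(size_t bytes);

static int by_virtual_address(const void *a, const void *b)
{
    const candidate_t *x = a, *y = b;
    return (x->virtual_address > y->virtual_address) -
           (x->virtual_address < y->virtual_address);
}

/* Reserve one contiguous placeholder sized for all candidates and assign each
 * candidate a slot in virtual-address order, as in Figure 1A. */
int reserve_swap_locations(candidate_t *pages, size_t count)
{
    size_t  total = count * PAGE_SIZE;
    int64_t base  = swapfile_reserve_region(total);
    if (base < 0)
        return -1;

    qsort(pages, count, sizeof pages[0], by_virtual_address);
    for (size_t i = 0; i < count; i++)
        pages[i].swap_offset = base + (int64_t)(i * PAGE_SIZE);

    return 0;  /* the pages are now "swapped out" logically; no data written yet */
}
```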
[0021] Figure 1B illustrates one or more candidate pages 110 in working set 102. In one or more write operations 112, one or more of the candidate pages 110 are written to placeholder 106. As shown in Figure 1B, each written candidate page can be written to its own reserved location. In some embodiments, although the candidate pages 110 may be non-contiguous in the working set, they are written to contiguous locations in the swap file, as shown in Figure 1B. Writing the candidate pages to a contiguous, sequentially ordered placeholder 106 can allow a subsequent read operation to read a large, sequentially ordered block from the swap file when the pages are read back into working set 102 during a later swap-in operation. Thus, in some embodiments, writing candidate pages to sequentially ordered locations in the swap file enables future reads that are large and/or sequentially ordered, and therefore efficient. Swapping out and writing pages are described in more detail with respect to Figure 4. Embodiments also provide for swapping pages in, including reading pages from the swap file back into the working set (i.e., returning the pages to physical memory for use by the process). Swapping pages in is described in more detail with respect to Figure 5.

[0022] In some embodiments, the decision to swap out and remove one or more pages from the working set of a process can be made by a policy manager or other operating system component based on various conditions. For example, a determination can be made that a process is suspended, inactive, or for some reason less active (for example, accessing fewer pages) than other active processes on the computing device. In such cases, some or all of the pages of the working set for the process can be removed from the working set to free more physical memory for use by other processes. However, to provide a smooth user experience, it may be desirable to read the pages back into the working set as efficiently as possible during swap-in, so that the process becomes active quickly. Efficient swapping through large and/or sequential I/O operations can enable rapid reactivation of a process and therefore provide better performance on a computing device that switches between active processes.

Illustrative Computing Device Architecture

[0023] Figure 2 shows a diagram of an example computing system architecture in which embodiments can operate. As shown, computing system 200 includes processing unit 202. Processing unit 202 may comprise a number of processing units and may be implemented as hardware, software, or some combination thereof. Processing unit 202 may include one or more processors. As used herein, processor refers to a hardware component. Processing unit 202 may execute computer-executable, processor-executable, and/or machine-executable instructions, written in any suitable programming language, to perform the various functions described herein.

[0024] Computing system 200 further includes system memory 204, which may include volatile memory such as random access memory (RAM) 206, static random access memory (SRAM), dynamic random access memory (DRAM), and so forth. RAM 206 includes one or more executing operating systems 208 and one or more executing processes 210, which include components, programs, or applications that are loadable and executable by processing unit 202. Thus, in some embodiments, RAM 206 may include the physical memory in which OS 208 and processes 210 run.

[0025] System memory 204 may further include non-volatile memory, such as read-only memory (ROM) 212, flash memory, and so forth. As shown, ROM 212 may include a basic input/output system (BIOS) 214 used to initialize computing system 200.
Although not shown, system memory 204 may also provide storage of program or component data that is generated and/or used by the operating system(s) 208 and/or processes 210 during their execution. System memory 204 may also include cache memory.

[0026] As shown in Figure 2, computing system 200 may also include non-removable storage 230 and/or removable storage 234, including but not limited to magnetic disk storage, optical disk storage, tape storage, and the like. Disk drives and their associated computer-readable media can provide non-volatile storage of computer-readable instructions, data structures, program modules, and other data for the operation of computing system 200. In addition, non-removable storage 230 may further include a hard disk 232. In some embodiments, hard disk 232 may provide secondary storage for use in the swap operations described herein.

[0027] In general, computer-readable media includes computer storage media and communication media.

[0028] Computer storage media include volatile and non-volatile, removable and non-removable media implemented in any method or technology for storage of information, such as computer-readable instructions, data structures, program modules, and other data. Computer storage media include, but are not limited to, RAM, ROM, electrically erasable programmable read-only memory (EEPROM), SRAM, DRAM, flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile disks (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to store information for access by a computing device.

[0029] In contrast, communication media may embody computer-readable instructions, data structures, program modules, or other data in a modulated data signal such as a carrier wave or other transmission mechanism. As defined here, computer storage media does not include communication media.

[0030] Computing system 200 may include input device(s) 236, including but not limited to a keyboard, a mouse, a pen, a game controller, a voice input device for speech recognition, a touch input device, and the like. Computing system 200 may further include output device(s) 238, including but not limited to a monitor, a printer, speakers, a haptic output, and the like. Computing system 200 may further include communication connection(s) 240 that allow computing system 200 to communicate with other computing devices 242, including client devices, server devices, databases, and/or other network devices available over one or more computer communication networks.

[0031] Figure 3 shows an example representation of computing system 200 and OS 208, according to embodiments. As shown, in some embodiments OS 208 includes one or more components, such as policy manager 302 and memory manager 304. In some embodiments, policy manager 302 determines when pages should be swapped out of or into a working set of a process running in the physical memory of computing system 200. In some embodiments, policy manager 302 can be described as a process lifetime manager for OS 208 that decides when processes should be suspended, resumed, and/or terminated under various conditions.
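One way to picture the division of labor between a policy manager (which decides when a process's pages should leave or re-enter memory) and a memory manager (which performs the swap) is sketched below. The event names, state enum, and function signatures are invented for illustration and are not the interfaces of any real operating system.

```c
#include <stdbool.h>

typedef enum { PROC_ACTIVE, PROC_SUSPENDED, PROC_TERMINATED } proc_state_t;

/* Assumed memory-manager entry points (illustrative). */
extern void mm_swap_out_working_set(int process_id);
extern void mm_swap_in_working_set(int process_id);

/* A policy manager reacts to process lifetime events and asks the memory
 * manager to swap the working set out or back in accordingly. */
void policy_on_state_change(int process_id, proc_state_t old_state,
                            proc_state_t new_state)
{
    bool went_inactive = (old_state == PROC_ACTIVE && new_state == PROC_SUSPENDED);
    bool woke_up       = (old_state == PROC_SUSPENDED && new_state == PROC_ACTIVE);

    if (went_inactive)
        mm_swap_out_working_set(process_id);   /* candidates reserved, written later */
    else if (woke_up)
        mm_swap_in_working_set(process_id);    /* large sequential reads, Figure 5 */
}
```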
[0032] As shown in Figure 3, OS 208 can communicate with hard disk 232 and RAM 206, for example via a communication bus of computing system 200. Hard disk 232 may include secondary storage for use by OS 208 in paging or swapping operations. In some embodiments, hard disk 232 includes one or more paging files 318 and/or one or more swap files 320. In some embodiments, the paging file(s) 318 and/or swap file(s) 320 are initialized by memory manager 304 for use in paging or swap operations. In the example shown, swap file(s) 320 are separate from paging file(s) 318. However, in some embodiments, swap file(s) 320 may be part of paging file(s) 318. RAM 206 may include the physical memory in which processes run, and may include one or more working sets 322 for such processes.

[0033] In addition, memory manager 304 may include one or more components that operate to perform the page swapping operations described herein. In some embodiments, memory manager 304 includes page table 306, which maps a virtual address to the location of each page either in physical memory or in the paging file for paging operations. Some embodiments may use page table 306 to store placeholder information for candidate pages in the swap file when candidate pages are swapped out of the working set of a process (as described below with reference to Figure 4A). In some embodiments, reserving space in the swap file may proceed as described in U.S. Patent Application Serial No. 13/042,128, entitled "Pagefile Reservations," filed March 7, 2011.

[0034] Memory manager 304 may also include modified list 308 and/or standby list 310. In some embodiments, after candidate pages have been swapped out of the working set and locations have been reserved for them in the swap file, the candidate pages can at some later point be removed from the working set and placed on modified list 308 to be written by a writer (e.g., page writer 312). Then, as each page is written to swap file 320, information for the page can be moved from modified list 308 to standby list 310. In some embodiments, standby list 310 keeps track of pages that have not yet been removed from physical memory even though they have been written to the swap file. In such cases, if the process accesses those pages, they can still be accessed directly in physical memory without being swapped back in. However, if memory manager 304 requires more physical memory for other processes, it can allocate the pages that are on the standby list. In some embodiments, memory manager 304 also includes page writer 312, which operates to write pages of working sets 322 to paging file 318 and/or swap file 320 (e.g., when swapping pages out), and/or to read pages back into working sets 322 (e.g., when swapping pages in).
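The bookkeeping described for page table 306, modified list 308, and standby list 310 can be modeled with a few declarations. This is a schematic sketch only; real page table entries and list handling differ considerably, and the names below are assumptions for illustration.

```c
#include <stdbool.h>
#include <stdint.h>

/* Where the data for a virtual page currently lives (illustrative). */
typedef enum { IN_PHYSICAL_MEMORY, IN_SWAP_FILE, IN_PAGING_FILE } backing_t;

typedef struct {
    uintptr_t virtual_address;
    backing_t backing;
    uint64_t  frame_or_offset;   /* physical frame number, or byte offset in the file   */
    bool      reserved_in_swap;  /* a placeholder slot has been set aside (Figure 1A)   */
} pte_t;

/* Pages removed from a working set wait on the modified list until the page
 * writer copies them to the swap file; written pages move to the standby list,
 * where they can either be reclaimed for other processes or handed straight
 * back to the owning process without any disk read. */
typedef struct page_node {
    pte_t            *pte;
    struct page_node *next;
} page_node_t;

typedef struct {
    page_node_t *modified_list;  /* dirty, not yet written to the swap file          */
    page_node_t *standby_list;   /* written (or clean), still present in physical RAM */
} memory_manager_lists_t;
```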
Illustrative Processes

[0035] Figures 4A, 4B, 5, and 6 show flowcharts of example processes according to various embodiments. The operations of these processes are illustrated in individual blocks and summarized with reference to those blocks. The processes are illustrated as logical flow graphs, in which each operation can represent one or more operations that can be implemented in hardware, software, or a combination thereof. In the context of software, the operations represent computer-executable instructions stored on one or more computer storage media that, when executed by one or more processors, cause the one or more processors to perform the recited operations. In general, computer-executable instructions include routines, programs, objects, modules, components, data structures, and the like that perform particular functions or implement particular abstract data types. The order in which the operations are described is not intended to be construed as a limitation, and any number of the described operations may be combined in any order, subdivided into multiple sub-operations, and/or executed in parallel to implement the described processes.

[0036] Figure 4A illustrates an example process for swapping out pages of a working set, according to embodiments. This swap-out process can be performed by one or more components of OS 208, such as memory manager 304 or policy manager 302. At 402, a decision is made to swap one or more pages of a working set of a process into a swap file. In some embodiments, this decision can be made by policy manager 302 based on various criteria. In some cases, the decision to swap out may be based on a determination that a process is inactive or suspended, that one or more threads associated with the process have not been in an active state for a certain period of time, that the process has been in the background for a period of time, that the process has not used a certain number of pages for a period of time, or that the computing system as a whole has been suspended and/or inactive.

[0037] Once the decision to swap out has been made, at 404 one or more candidate pages are identified for swapping out of the working set of the process. In some embodiments, the memory manager analyzes each page in the working set and determines whether it is a candidate for swap-out based on certain criteria. In some embodiments, candidates for swap-out may include private pages and/or pagefile-backed shared pages in the working set. In some embodiments, candidate pages can be identified based on whether the pages are clean, that is, pages that have been saved to a paging file and have not since been modified, such that the current version of the page in physical memory is the same as the copy in the paging file. Dirty pages are pages that may have changed since they were written to a paging file, or that have not yet been written to a paging file. Also, in some embodiments, whether or not a page is locked in memory can be considered when deciding whether the page is a candidate for swap-out.

[0038] At 406, space is reserved in the swap file based on a calculated total size of the identified candidate pages (for example, placeholder 106). At 408, a location is assigned or reserved in the swap file placeholder for each candidate page. In some embodiments, locations are reserved in virtual address order according to the virtual addresses of the candidate pages in the working set. Thus, even though the candidate pages may be non-contiguous within the working set, their locations in the swap file can be contiguous. The contiguous, sequentially ordered locations of candidate pages in the swap file placeholder can allow future reads from the swap file to be performed in large, sequentially ordered blocks, providing efficient swap-in. In some embodiments, reserving swap file space may proceed as described in U.S. Patent Application Serial No. 13/042,128, entitled "Pagefile Reservations," filed March 7, 2011. Once locations for the candidate pages have been reserved at 408, these candidate pages can be said to have been swapped out. At 410, a list (or other data structure) of the candidate pages that have been swapped out is updated. In some embodiments, this list is updated as locations are reserved at 408.
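The candidate-identification step (404) can be sketched as a simple filter over the working set, followed by a count used to size the placeholder (406, 408). The criteria and the `page_info_t` fields below are a simplified reading of the paragraph above, not an exhaustive policy.

```c
#include <stdbool.h>
#include <stddef.h>

typedef struct {
    bool is_private;        /* private to the process (or pagefile-backed shared) */
    bool is_locked;         /* locked into physical memory                        */
    bool is_clean;          /* identical to the copy already in the paging file   */
} page_info_t;

/* Decide whether one working-set page is a candidate for swap-out (step 404).
 * Clean pages need no new write: the paging-file copy is still valid. */
bool is_swap_candidate(const page_info_t *p)
{
    if (p->is_locked)
        return false;           /* pinned pages stay in physical memory     */
    if (!p->is_private)
        return false;           /* this sketch considers only private pages */
    return true;
}

/* Count candidates so the placeholder (step 406) can be sized to their total
 * size before per-page locations are reserved (step 408). */
size_t count_candidates(const page_info_t *pages, size_t n)
{
    size_t candidates = 0;
    for (size_t i = 0; i < n; i++)
        if (is_swap_candidate(&pages[i]))
            candidates++;
    return candidates;
}
```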
[0039] Once the locations have been reserved and the list updated, at some later time the memory manager may choose to write some or all of the swapped-out pages to their reserved locations in the swap file. Figure 4B shows an example process for writing swapped-out pages, according to embodiments. At 412, a decision is made whether to write one or more of the identified candidate pages to the swap file (for example, to write out the working set pages). In some embodiments, this decision can be made based on a determination that a certain time limit has elapsed during which the criteria that led to the swap-out decision (at 402) remain valid. For example, a certain period of time (e.g., 5 minutes) may pass during which a process remains inactive or suspended. In some embodiments, the decision can be made based on a determination by the memory manager that more physical memory is needed for use by one or more active processes.

[0040] If the decision is made at 412 to write one or more candidate pages to the swap file, at 414 the one or more candidate pages to be written may be removed from the working set. In some embodiments, all of the candidate pages are removed and written to the swap file in one or more write operations. In some embodiments, a subset of the candidate pages is removed. In some embodiments, which candidate pages are written can be determined based on memory pressure (for example, based on the memory manager's need for more physical memory). In some embodiments, the decision about which candidate pages should be written out may be based on how recently those pages were accessed by the process. For example, the memory manager can choose to write out those pages that were least recently used and that were not accessed by the process within a certain period of time.

[0041] In some embodiments, removal of the candidate pages can be optimized by removing them from the working set in an order consistent with the order in which locations for the candidate pages were reserved in the swap file. For example, candidates can be removed from the working set in virtual address order, such as in cases where locations were reserved in virtual address order. In some embodiments, removal of candidate pages from the working set can be arbitrary and/or random (for example, unordered), even though the candidate pages are still written to sequentially ordered locations in the swap file based on virtual address order. In some embodiments, pages can remain in the working set in physical memory (for example, even after those pages are written to the swap file) until the memory manager needs to use those pages for active processes.

[0042] At 416, the candidate pages are written to the swap file. In some embodiments, the candidate pages are written to their reserved locations, which are sequentially ordered in the swap file according to their virtual address order (for example, in ascending or descending order of the virtual addresses of the pages in the working set). In addition, the write operations themselves can be optimized and/or performed more efficiently by grouping candidate pages into writes of large blocks of memory as much as possible, and/or by performing the write operations in sequential virtual address order. In some embodiments, page writer 312 performs the write operations that write the candidate pages to their reserved locations in the swap file. In cases where candidate pages have not been assigned a placeholder, the pages can be written to the paging file. In some embodiments, the addresses of the written pages can be saved by the memory manager in a data structure to be used when reading the pages back in from the swap file.
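As a user-space approximation of step 416, an ordinary file can stand in for the swap file and adjacent reserved slots can be coalesced into one large, sequentially ordered write. The structure and buffer sizes below are illustrative assumptions, not the patent's implementation.

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>
#include <sys/types.h>
#include <unistd.h>   /* pwrite */

#define PAGE_SIZE 4096u

typedef struct {
    const void *data;        /* page contents still sitting in physical memory */
    int64_t     swap_offset; /* reserved slot in the swap file (Figure 1A)     */
} out_page_t;

/* Write candidate pages to their reserved slots (step 416), coalescing runs of
 * adjacent slots into one large sequential write.  `pages` is assumed to be
 * ordered by reserved offset, which here matches virtual address order. */
int write_candidates(int swap_fd, const out_page_t *pages, size_t count)
{
    unsigned char buffer[64 * PAGE_SIZE];   /* cluster up to 64 pages per I/O */
    size_t i = 0;

    while (i < count) {
        int64_t start = pages[i].swap_offset;
        size_t  run   = 1;

        /* Extend the run while the next reserved slot is contiguous. */
        while (i + run < count &&
               run < sizeof buffer / PAGE_SIZE &&
               pages[i + run].swap_offset == start + (int64_t)(run * PAGE_SIZE))
            run++;

        for (size_t k = 0; k < run; k++)
            memcpy(buffer + k * PAGE_SIZE, pages[i + k].data, PAGE_SIZE);

        if (pwrite(swap_fd, buffer, run * PAGE_SIZE, (off_t)start) < 0)
            return -1;          /* one large, sequentially ordered write */
        i += run;
    }
    return 0;
}
```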
[0043] In some cases, the decision can be made at 412 not to write the swapped-out pages. In such cases, at 418, the reserved locations can be retained until swap-in occurs (as shown in Figure 5). In some embodiments, the decision can be made not to write candidate pages to the swap file if the conditions that led to the initial swap-out decision (e.g., at 402) are no longer present after a certain timeout period. For example, the process may have become active again or may no longer be suspended. In other cases, a condition for swapping pages back in may occur before the candidate pages have been written to the swap file, for example if one or more of the candidate pages are accessed by the process (for example, conditions that would lead to a page fault if those pages had been removed from physical memory).

[0044] Some embodiments support an optimization in which pages are removed from the working set over a period of time after the swap-out decision has been made at 402. In addition, groups of pages can be swapped out and/or written to the swap file according to usage patterns. For example, the memory manager can swap out and/or write a first group of pages that were most recently accessed by the process (e.g., accessed in the last 5 seconds), then, after a period of time, swap out and/or write a second group of pages that were less recently accessed (for example, accessed between the last 5 seconds and the last 10 seconds), and so on. In some cases, these groups may instead proceed from less recently accessed to more recently accessed. In some embodiments, swapping out and/or writing pages can be based on a certain activity or phase of the executing process. For example, an application might enter a particular phase or exhibit a certain behavior, and pages related to that phase or behavior (or not related to that phase or behavior) can be swapped out by the memory manager.

[0045] In addition, embodiments support various combinations of these optimizations. For example, groups of pages can be swapped out, and the same groups or differently determined groups can be written to the swap file. As another example, an entire working set of pages can be swapped out and then groups of pages identified for writing to the swap file as described here.

[0046] Also, in some embodiments, swapping out one or more pages of a process's working set can be described as saving the process's current state to secondary storage, so that when the memory manager swaps the pages back in, the process is restored to the state it was in when, for example, it was suspended or became inactive. In some embodiments, an executing process may request access to pages that were swapped out before they have been swapped back in. In such cases, the memory manager can determine to swap in all of the swapped-out pages, to swap in those pages for which access is requested, and/or to swap in one or more pages that are close to the requested page(s) in the virtual address space. This can include releasing the reserved locations in the swap file for the pages that are swapped in.
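The idea of swapping in pages near a requested page in the virtual address space can be sketched as a marking pass over the working set: when an access arrives, the faulting page and its swapped-out neighbors are flagged so that a later pass can read them together. The cluster size and field names are assumptions for illustration.

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

#define PAGE_SIZE     4096u
#define CLUSTER_PAGES 16u     /* illustrative swap-in cluster size */

typedef struct {
    uintptr_t virtual_address;
    bool      swapped_out;     /* has a reserved/written slot in the swap file */
    bool      wants_swap_in;   /* marked for the upcoming swap-in pass         */
} ws_page_t;

/* When `fault_va` is touched, mark it and nearby swapped-out pages (in virtual
 * address space) for swap-in, so a later pass can read them in one large,
 * sequential request instead of many single-page reads. */
void mark_cluster_for_swap_in(ws_page_t *pages, size_t n, uintptr_t fault_va)
{
    uintptr_t half = (CLUSTER_PAGES / 2) * PAGE_SIZE;
    uintptr_t lo   = (fault_va > half) ? fault_va - half : 0;
    uintptr_t hi   = fault_va + half;

    for (size_t i = 0; i < n; i++) {
        uintptr_t va = pages[i].virtual_address;
        if (pages[i].swapped_out && va >= lo && va <= hi)
            pages[i].wants_swap_in = true;
    }
}
```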
[0047] Figure 5 shows an example process for swapping pages from secondary storage back into a working set of a process in physical memory, according to embodiments. In some embodiments, the swap-in process is performed by one or more components of OS 208, such as memory manager 304 or policy manager 302. At 502, a decision or determination is made to swap in one or more pages that were previously swapped out of a working set of the process. In some embodiments, this decision is made by policy manager 302 and can be based on various criteria. Such criteria may include receiving an indication that the process is no longer inactive or suspended, or that the process seeks to access one or more pages that are not in physical memory (for example, a page fault has occurred). In some embodiments, the criteria may include the cessation of the conditions that led to the swap-out decision at 402.

[0048] At 504, one or more previously swapped-out pages are identified for swap-in. By the time the decision to swap in is made at 502, one or more of the swapped-out pages may no longer be in physical memory. For example, the memory manager might have reused one or more of the swapped-out pages for other processes. Thus, at 504, identifying pages for swap-in may include determining which swapped-out pages are no longer in physical memory. In some embodiments, identification of pages for swap-in can be based on the pages whose addresses were saved in a data structure by the memory manager when they were swapped out. In some embodiments, attempted accesses by a process to one or more pages (e.g., page faults) can cause the pages to be swapped in.

[0049] At 506, a determination is made whether one or more pages identified as previously swapped out have been written to the swap file, as described with respect to Figure 4B above, and whether the pages have left physical memory. In some embodiments, at 508, if the pages were written to the swap file and have left physical memory, the pages identified for swap-in are read from their locations in the swap file back into the working set in physical memory. In some cases, pages may have been removed from the working set but may not have left physical memory yet (for example, if the pages are cached on the standby list). In such cases, these cached pages can be added back to the working set from the standby list and their reserved locations, with no need to read the pages from the swap file. Also, some of the working set pages may have left physical memory after the swap-out, while other pages remain cached on the standby list. In such cases, the pages that have left physical memory can be read from the swap file and, along with the cached pages, added back to the working set.

[0050] In some embodiments, this reading of the pages can proceed in a similar way to the reads issued when the process takes a page fault. However, embodiments support reads of large blocks of memory and/or reads in sequential virtual address order, which are therefore more efficient than reading smaller blocks from arbitrary locations on secondary storage. For example, because the pages were written to a contiguous block reserved in the swap file (or, in some cases, a small number of adjacent blocks, as described below) and were written in sequential virtual address order, sets of multiple pages can be read in large blocks and in sequential virtual address order, providing more efficient read operations.
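The read side of step 508 can be emulated in user space much like the write side: pages still cached on the standby list need no I/O, and the remainder are read with as few large, sequential reads as their contiguous reserved slots allow. The types and buffer size below are illustrative assumptions.

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>
#include <sys/types.h>
#include <unistd.h>   /* pread */

#define PAGE_SIZE 4096u

typedef struct {
    void    *dest;          /* destination standing in for a physical page             */
    int64_t  swap_offset;   /* reserved slot in the swap file, in virtual address order */
    bool     still_cached;  /* page never left physical memory (standby list)          */
} in_page_t;

/* Swap pages back into the working set (steps 506-508 of Figure 5). */
int read_pages_back(int swap_fd, const in_page_t *pages, size_t count)
{
    unsigned char buffer[64 * PAGE_SIZE];   /* read up to 64 pages per request */
    size_t i = 0;

    while (i < count) {
        if (pages[i].still_cached) {        /* re-attach without reading */
            i++;
            continue;
        }
        size_t run = 1;
        while (i + run < count &&
               !pages[i + run].still_cached &&
               run < sizeof buffer / PAGE_SIZE &&
               pages[i + run].swap_offset ==
                   pages[i].swap_offset + (int64_t)(run * PAGE_SIZE))
            run++;

        if (pread(swap_fd, buffer, run * PAGE_SIZE,
                  (off_t)pages[i].swap_offset) < 0)
            return -1;                      /* one large, sequential read */

        for (size_t k = 0; k < run; k++)
            memcpy(pages[i + k].dest, buffer + k * PAGE_SIZE, PAGE_SIZE);
        i += run;
    }
    return 0;
}
```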
[0051] In some embodiments, it can be beneficial for performance and/or usability that the reading in of swapped-out pages is performed as efficiently as the system allows, since a user may be waiting for a previously inactive or suspended process to become active again (for example, when the process is one the user has requested). With that in mind, some embodiments may support an optimization in which pages are read into physical memory in an order that is based on how frequently or recently they were accessed prior to the swap-out. For example, before the pages of a working set are swapped out, the memory manager can keep track of which pages were most frequently accessed or most recently accessed by the process, and those pages can be swapped in earlier than other pages.

[0052] For example, a first group of pages can be determined to have been accessed within a certain period of time (for example, in the last 5 seconds before the application was suspended), a second group of pages can be determined to have been accessed within an adjacent time period (e.g., between 5 and 10 seconds), a third group can be determined to have been accessed within the next adjacent time period (e.g., between 10 and 15 seconds), and so on. Then, when the decision to swap pages in is made, the first group can be swapped in first, the second group can be swapped in after the first group, the third group after the second group, and so on. Such an optimization can help ensure that the process is at least partially active and usable by a user more quickly, given that the pages most recently accessed by the process before the swap-out are swapped in first.

[0053] Furthermore, some embodiments can support an optimization in which working-set memory management is performed before the swap operation. For example, during the swap-out process, the memory manager may proactively "age" some or all of the working set so as to trim it down to a subset of pages determined to be recently used. This subset of pages can then be swapped out and/or written to the swap file. Swap-in can then benefit, because not all of the pages will have been swapped out and fewer pages therefore need to be read from the swap file.

[0054] At 510, the locations reserved in the swap file for the swapped-out pages are released. In some embodiments, release of the reserved locations can also be performed in cases where it is determined at 506 that the pages to be swapped in were never written to their reserved locations in the swap file.

[0055] Figure 6 shows an example process for dynamically managing the swap file, according to embodiments. In some embodiments, this process is performed by a component of OS 208 such as memory manager 304. At 602, the swap file is initialized. In some embodiments, the swap file is initialized to a predetermined size (for example, 256 MB) when the computing system boots. In other embodiments, the swap file is initialized when the memory manager determines that swap operations are taking place, and it can be initialized with an initial size sufficient to accommodate the pages to be swapped.
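One way to emulate the initialization step (602) in user space is to create an ordinary file and set its initial length, either at startup with a predetermined size such as 256 MB or lazily once swapping begins. The path, size, and use of `ftruncate` are illustrative; `ftruncate` only sets the logical length, whereas a real implementation would want the space actually allocated and, ideally, contiguous.

```c
#include <fcntl.h>
#include <sys/types.h>
#include <unistd.h>

/* Create (or open) a dedicated swap file and give it an initial size. */
int swapfile_init(const char *path, off_t initial_bytes)
{
    int fd = open(path, O_RDWR | O_CREAT, 0600);
    if (fd < 0)
        return -1;
    if (ftruncate(fd, initial_bytes) != 0) {
        close(fd);
        return -1;
    }
    return fd;   /* caller keeps the descriptor for later pwrite/pread calls */
}
```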
[0056] As the memory manager (or other OS component) operates, a determination can be made at 604 to swap out one or more pages of the working set of a process running in physical memory, as described above with respect to Figure 4A. At 606, a determination can be made whether additional space is needed in the swap file to accommodate the swapped-out pages if they are written to the swap file. If more space is needed, the swap file size can be dynamically increased at 608. In some embodiments, the decision to increase the swap file size can be based on the memory manager receiving requests for page swap-outs that cannot be accommodated at the current swap file size. Also, some embodiments may provide for a maximum swap file size. In some embodiments, the initial swap file size and/or dynamic changes to it can be determined at least in part by the size and/or type of the processes running on the system.

[0057] At 610, swap-out operations and/or write operations are performed as described above. At some later point, a determination can be made at 612 to swap in one or more of the swapped-out pages, and the reserved locations of one or more pages in the swap file are released at 614. The release of reserved locations can proceed as described above with respect to Figure 5. At 616, a determination can be made that less space is needed in the swap file going forward, for example following the swap-in operations at 612 and the release of reserved locations at 614. In such situations, the swap file size can be dynamically decreased at 618. In some embodiments, the decision to decrease the swap file size can be made based on a determination that swapping pages in for one or more processes has reduced the need for swap file space. In some embodiments, freed space in the swap file can be reused by the memory manager for later writes of swapped-out pages, and the memory manager can reuse swap file space in a way that minimizes fragmentation (for example, so that pages are preferably written to blocks that are more contiguous and/or sequential in virtual address order).

[0058] Because embodiments seek to perform swap operations as large reads and writes of groups of pages in sequential order, it can be advantageous for the swap file itself to be less fragmented and more contiguous on secondary storage. To that end, in some embodiments, when the memory manager determines that more swap file space is needed, it may request additional space from the operating system in a given block size (e.g., 128 MB) that is greater than the amount by which the swap file is shrunk (e.g., 64 MB). By requesting additional space in larger blocks, embodiments can reduce the possibility of external fragmentation of the swap file, as the file system attempts to find the requested additional swap file space as contiguous space (or in fewer contiguous blocks) on secondary storage.
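A sizing policy along the lines of paragraphs [0056] through [0058] might look like the sketch below: grow in large fixed blocks (larger than the shrink step) up to a maximum, and shrink in smaller steps only when enough slack exists. The block sizes, the maximum, and the shrink condition are illustrative assumptions, not values from the patent beyond the 128 MB and 64 MB examples.

```c
#include <stdint.h>

#define MB           (1024ull * 1024ull)
#define GROW_BLOCK   (128 * MB)     /* growth request size, larger than the shrink step */
#define SHRINK_BLOCK ( 64 * MB)
#define MAX_SIZE     (4096 * MB)    /* illustrative cap on the swap file size */

uint64_t next_swapfile_size(uint64_t current, uint64_t bytes_needed,
                            uint64_t bytes_in_use)
{
    if (bytes_needed > current) {
        uint64_t grown = current;
        while (grown < bytes_needed && grown + GROW_BLOCK <= MAX_SIZE)
            grown += GROW_BLOCK;    /* grow in large blocks to stay contiguous */
        return grown;
    }
    /* Shrink only when usage leaves at least one full shrink step of slack. */
    if (current >= SHRINK_BLOCK && bytes_in_use + SHRINK_BLOCK <= current)
        return current - SHRINK_BLOCK;
    return current;
}
```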
[0059] Also, given the importance of a contiguous swap file, some embodiments use a swap file that is separate from the paging file, as shown in Figure 3. Using a separate, dedicated swap file can increase the likelihood that the swap file is created and/or expanded as a single contiguous block (or a small number of contiguous blocks). Although some embodiments support the use of the paging file for the swap operations described here, that situation can increase the possibility that the swap space becomes externally fragmented and therefore less amenable to large, sequential I/O operations, given the arbitrary, non-contiguous access patterns of traditional paging methods that use the paging file. Embodiments therefore support the use of a separate, dedicated swap file to increase the efficiency of I/O operations, given the increased opportunity for larger and more sequential I/O operations when a separate swap file is used.

[0060] Furthermore, embodiments can support a swap file that is separate but not contiguous, distributed among a certain number of segments on secondary storage. Some embodiments can support a maximum number of such segments (for example, five) to ensure that the swap file is not too fragmented. In some embodiments, a single swap file can be used to swap pages for all processes in the computing system. In other embodiments, separate swap files can be used for individual processes or groups of processes in the computing system.

Conclusion

[0061] Although the techniques have been described in language specific to structural features and/or methodological acts, it should be understood that the appended claims are not necessarily limited to the specific features or acts described. Rather, the specific features and acts are disclosed as example implementations of such techniques.
Claims:
Claims (9)

[0001] 1. A computer-implemented method, comprising the steps of: making (402) a decision to swap one or more pages of a working set of a process into a swap file; identifying (404) one or more candidate pages to swap out of the working set of pages for the process; and reserving (406) space in the swap file on secondary storage, the reserved space corresponding to a total size of the one or more candidate pages; characterized by the fact that the method comprises: making (412) a decision whether to write one or more of the identified candidate pages to the swap file, wherein the decision is made (412) based on a determination that a threshold period has elapsed during which the criteria that led to the swap decision (402) remain valid; if a decision is made (412) to write one or more of the identified candidate pages, removing (414) the one or more candidate pages to be written from the working set and writing (416) the one or more candidate pages to be written to sequentially ordered locations in a placeholder in the swap file, wherein the locations are reserved in virtual address order according to the virtual addresses of the candidate pages in the working set; and if a decision is made (412) not to write one or more of the identified candidate pages, retaining (418) the reserved locations until swap-in occurs.

[0002] 2. The method according to claim 1, characterized by the fact that the one or more candidate pages to be written are removed (414) from the working set in an order consistent with the order in which the locations for the candidate pages were reserved in the swap file.

[0003] 3. The method according to claim 1, characterized by the fact that the sequentially ordered locations are assigned to the one or more candidate pages contiguously and in sequential virtual address order.

[0004] 4. The method according to claim 1, characterized by the fact that it further comprises reading a grouping of at least some of the written candidate pages from the space reserved in the swap file into the working set for the process in sequential virtual address order.

[0005] 5. The method according to claim 1, characterized by the fact that it further comprises reserving (406) contiguous locations in the swap file.

[0006] 6. The method according to claim 1, characterized by the fact that writing (416) the one or more candidate pages is performed sequentially in virtual address order.
[0007] 7. A system comprising: at least one processor; a memory (206); and a memory manager (304) that executes on the at least one processor and operates to: make a decision to swap one or more pages of a working set (322) of a process into a swap file (320); identify one or more candidate pages to swap out of the working set (322) of pages for the process executing in the memory (206); and reserve space in the swap file (320) on secondary storage (232) of the system, the reserved space corresponding to a total size of the one or more candidate pages; characterized by the fact that the memory manager (304) is further configured to: make a decision whether to write one or more of the identified candidate pages to the swap file (320), wherein the decision is made based on a determination that a threshold period has elapsed during which the criteria that led to the swap decision remain valid; if a decision is made to write one or more of the identified candidate pages, remove the one or more candidate pages to be written from the working set (322) and write the one or more candidate pages to be written to sequentially ordered locations in a placeholder in the swap file (320), wherein the locations are reserved in virtual address order according to the virtual addresses of the candidate pages in the working set; and if a decision is made not to write one or more of the identified candidate pages, retain the reserved locations until swap-in occurs.

[0008] 8. The system according to claim 7, characterized by the fact that it further comprises a policy manager (302) that executes on the at least one processor and that determines the swapping of at least part of the working set (322) for the process based on a detected state of the process.

[0009] 9. The system according to claim 7, characterized by the fact that the memory manager (304) further operates to read one or more of the written candidate pages from the space reserved in the swap file (320) into the working set (322) for the process in sequential virtual address order.